Recent large-scale image generation models such as Stable Diffusion have exhibited an impressive ability to generate fairly realistic images starting from a very simple text prompt. Could such models render real images obsolete for training image prediction models? In this paper, we answer part of this provocative question by questioning the need for real images when training models for ImageNet classification. More precisely, provided only with the class names that were used to build the dataset, we explore the ability of Stable Diffusion to generate synthetic clones of ImageNet and measure how useful they are for training classification models from scratch. We show that with minimal and class-agnostic prompt engineering, these ImageNet clones, which we denote ImageNet-SD, are able to close a large part of the gap between models trained on synthetic images and models trained on real images, on the several standard classification benchmarks that we consider in this study. More importantly, we show that models trained on synthetic images exhibit strong generalization properties and perform on par with models trained on real data.
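As a rough illustration of this pipeline, the sketch below generates a handful of synthetic images per class starting from nothing but class names, assuming the Hugging Face diffusers library; the checkpoint name, prompt template, and sampling settings are illustrative assumptions, not the exact ImageNet-SD recipe.

```python
# Sketch: build a tiny "ImageNet clone" from class names alone.
# Assumptions: diffusers is installed, a CUDA GPU is available, and the
# prompt template below stands in for the paper's prompt engineering.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("imagenet_sd", exist_ok=True)
class_names = ["tench", "goldfish", "great white shark"]  # ImageNet class names

for name in class_names:
    # Class-agnostic template: only the class name varies across classes.
    prompt = f"a photo of a {name}"
    for i in range(4):  # a few samples per class for illustration
        image = pipe(prompt, guidance_scale=7.5).images[0]
        image.save(f"imagenet_sd/{name.replace(' ', '_')}_{i}.png")
```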
Training state-of-the-art models for human pose estimation in videos requires datasets with annotations that are very hard to obtain. Although transformers have recently been used for body pose sequence modeling, related methods rely on pseudo ground truth to augment the currently limited training data available for learning such models. In this paper, we introduce PoseBERT, a transformer module fully trained on 3D Motion Capture (MoCap) data via masked modeling. It is simple, generic, and versatile, as it can be plugged on top of any image-based model to exploit temporal information in a video-based setting. We showcase variants of PoseBERT with inputs ranging from 3D skeleton keypoints to rotations of a 3D parametric model, for either the full body or just the hands (MANO). Since PoseBERT training is task-agnostic, the model can be applied to several tasks such as pose refinement, future pose prediction, or motion completion. Our experimental results validate that adding PoseBERT on top of various state-of-the-art pose estimation methods consistently improves their performance, while its low computational cost allows us to use it in a real-time demo for animating a robotic hand via a webcam. Test code and models are available at https://github.com/naver/posebert.
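A minimal sketch of the masked-modeling idea on pose sequences, assuming PyTorch; the architecture sizes, masking ratio, and 3D-keypoint input format are illustrative assumptions rather than the PoseBERT configuration.

```python
# Sketch: a transformer trained to reconstruct masked frames of a MoCap
# pose sequence, in the spirit of masked modeling on pose data.
import torch
import torch.nn as nn

class MaskedPoseTransformer(nn.Module):
    def __init__(self, num_joints=24, dim=256, depth=4, heads=8, max_len=512):
        super().__init__()
        self.embed = nn.Linear(num_joints * 3, dim)        # flatten (J, 3) keypoints
        self.pos = nn.Parameter(torch.zeros(1, max_len, dim))
        self.mask_token = nn.Parameter(torch.zeros(dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_joints * 3)         # reconstruct the pose

    def forward(self, poses, mask):
        # poses: (B, T, J*3) MoCap sequence; mask: (B, T) bool, True = hidden frame
        x = self.embed(poses)
        x[mask] = self.mask_token                          # replace hidden frames
        x = x + self.pos[:, : x.size(1)]
        return self.head(self.encoder(x))

model = MaskedPoseTransformer()
poses = torch.randn(2, 16, 24 * 3)                         # toy MoCap batch
mask = torch.rand(2, 16) < 0.5                             # hide roughly half the frames
recon = model(poses, mask)
loss = ((recon - poses) ** 2)[mask].mean()                 # supervise hidden frames only
loss.backward()
```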
We consider the problem of training a deep neural network on a given classification task, e.g., ImageNet-1K (IN1K), so that it excels at both that task and at other (future) transfer tasks. These two seemingly contradictory properties impose a trade-off between improving the model's generalization and maintaining its performance on the original task. Models trained with self-supervised learning (SSL) tend to generalize better than their supervised counterparts on transfer learning tasks; yet, they still lag behind supervised models on IN1K. In this paper, we propose a supervised learning setup that leverages the best of both worlds. We enrich the vanilla supervised training framework with two key components of recent SSL models: multi-scale crops for data augmentation and an expendable projector head. We further replace the last layer of class weights with class prototypes computed on the fly using a memory bank. We show that these three improvements lead to a much more favorable trade-off between the IN1K training task and 13 transfer tasks. Among all the explored configurations, we single out two models: t-ReX, which sets a new state of the art for transfer learning and outperforms top methods such as DINO and PAWS on IN1K, and t-ReX*, which matches the highly optimized RSB-A1 model on IN1K while performing better on transfer tasks. Project page and pretrained models: https://europe.naverlabs.com/t-rex
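The sketch below illustrates the prototype-based classification step, assuming PyTorch; the memory-bank size, cosine logits, and temperature are illustrative assumptions, not the exact t-ReX recipe.

```python
# Sketch: replace a linear classifier with class prototypes computed
# on the fly from a memory bank of recent embeddings.
import torch
import torch.nn.functional as F

num_classes, dim, queue_size = 1000, 256, 8192
bank_feats = F.normalize(torch.randn(queue_size, dim), dim=1)   # stored embeddings
bank_labels = torch.randint(num_classes, (queue_size,))

def prototype_logits(feats, temperature=0.1):
    # Average each class's memory-bank features into a prototype ...
    protos = torch.zeros(num_classes, dim)
    protos.index_add_(0, bank_labels, bank_feats)
    counts = torch.bincount(bank_labels, minlength=num_classes).clamp(min=1)
    protos = F.normalize(protos / counts[:, None], dim=1)
    # ... then classify by cosine similarity to the prototypes.
    return feats @ protos.t() / temperature

feats = F.normalize(torch.randn(32, dim), dim=1)                # batch embeddings
labels = torch.randint(num_classes, (32,))
loss = F.cross_entropy(prototype_logits(feats), labels)
```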
Dimensionality reduction methods are unsupervised approaches that learn low-dimensional spaces in which some properties of the initial space, typically the notion of "neighborhood", are preserved. Such methods usually require propagation on large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a dimensionality reduction method for generic input spaces that ports the recent self-supervised learning framework of Zbontar et al. (2021) to the specific task of dimensionality reduction, over arbitrary representations. We propose to use nearest neighbors to build pairs from a training set, and a redundancy reduction loss to learn an encoder that produces representations invariant across such pairs. TLDR is a method that is simple, easy to train, and of broad applicability; it consists of an offline nearest-neighbor computation step, which can be highly approximated, and a straightforward learning process. Aiming for scalability, we focus on improving linear dimensionality reduction and show consistent gains on image and document retrieval tasks, e.g. gaining +4% mAP over PCA on ROxford for GeM-AP, and improving the performance of DINO on ImageNet or retaining it with a 10x compression.
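A minimal sketch of the redundancy reduction loss of Zbontar et al. (2021) applied to nearest-neighbor pairs, assuming PyTorch; the lambda weight, the linear encoder, and the synthetic stand-in for the offline k-NN step are illustrative assumptions.

```python
# Sketch: Barlow-Twins-style redundancy reduction over neighbor pairs.
import torch

def redundancy_reduction_loss(z_a, z_b, lambd=5e-3):
    # z_a, z_b: (N, D) embeddings of a sample and one of its nearest neighbors.
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)          # standardize per dimension
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    n = z_a.size(0)
    c = z_a.t() @ z_b / n                           # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()  # pull pair correlations to 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate dims
    return on_diag + lambd * off_diag

encoder = torch.nn.Linear(2048, 128)                # linear dimensionality reduction
x = torch.randn(256, 2048)                          # input representations
x_nn = x + 0.01 * torch.randn_like(x)               # stand-in for offline k-NN pairs
loss = redundancy_reduction_loss(encoder(x), encoder(x_nn))
loss.backward()
```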
Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e. the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes or keep very large memory banks; increasing memory requirements, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level that can be computed on-the-fly with minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation, and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method. Project page: https://europe.naverlabs.com/mochi
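A minimal sketch of feature-level hard negative mixing, assuming PyTorch; the queue size, number of hard negatives, and mixing counts are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: synthesize harder negatives on the fly by convexly mixing the
# negatives most similar to the anchor, then extend the negative set.
import torch
import torch.nn.functional as F

def mix_hard_negatives(query, queue, n_hard=64, n_mix=32):
    # query: (D,) anchor embedding; queue: (K, D) memory bank of negatives.
    sims = queue @ query                                  # similarity to the anchor
    hard = queue[sims.topk(n_hard).indices]               # hardest negatives
    i = torch.randint(n_hard, (n_mix,))
    j = torch.randint(n_hard, (n_mix,))
    alpha = torch.rand(n_mix, 1)                          # random convex weights
    mixed = alpha * hard[i] + (1 - alpha) * hard[j]       # synthetic hard negatives
    return F.normalize(torch.cat([queue, mixed]), dim=1)  # extended negative set

query = F.normalize(torch.randn(128), dim=0)
queue = F.normalize(torch.randn(4096, 128), dim=1)
negatives = mix_hard_negatives(query, queue)              # (4096 + 32, 128)
```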
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g. loss re-weighting, data re-sampling, or transfer learning from head to tail classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks such as ImageNet-LT, Places-LT, and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, and even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.
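The sketch below illustrates one classifier-only adjustment studied in this line of work, tau-normalization of the classifier weights, assuming PyTorch; the value of tau and the frozen-feature setup are illustrative assumptions.

```python
# Sketch: rebalance a jointly trained classifier without retraining the
# representation, by normalizing each class's weight vector.
import torch

def tau_normalize(weight, tau=1.0):
    # weight: (num_classes, D) linear classifier weights. Head classes tend
    # to acquire larger weight norms under instance-balanced training;
    # dividing each row by its norm**tau rebalances the decision boundaries.
    norms = weight.norm(dim=1, keepdim=True).clamp(min=1e-12)
    return weight / norms.pow(tau)

classifier = torch.nn.Linear(2048, 1000, bias=False)      # jointly trained head
features = torch.randn(16, 2048)                          # frozen representations
logits = features @ tau_normalize(classifier.weight).t()  # rebalanced predictions
```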
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Lipschitz regularized f-divergences are constructed by imposing a bound on the Lipschitz constant of the discriminator in the variational representation. They interpolate between the Wasserstein metric and f-divergences and provide a flexible family of loss functions for non-absolutely continuous (e.g. empirical) distributions, possibly with heavy tails. We construct Lipschitz regularized gradient flows on the space of probability measures based on these divergences. Examples of such gradient flows are Lipschitz regularized Fokker-Planck and porous medium partial differential equations (PDEs) for the Kullback-Leibler and alpha-divergences, respectively. The regularization corresponds to imposing a Courant-Friedrichs-Lewy numerical stability condition on the PDEs. For empirical measures, the Lipschitz regularization on gradient flows induces a numerically stable transporter/discriminator particle algorithm, where the generative particles are transported along the gradient of the discriminator. The gradient structure leads to a regularized Fisher information (particle kinetic energy) used to track the convergence of the algorithm. The Lipschitz regularized discriminator can be implemented via neural network spectral normalization and the particle algorithm generates approximate samples from possibly high-dimensional distributions known only from data. Notably, our particle algorithm can generate synthetic data even in small sample size regimes. A new data processing inequality for the regularized divergence allows us to combine our particle algorithm with representation learning, e.g. autoencoder architectures. The resulting algorithm yields markedly improved generative properties in terms of efficiency and quality of the synthetic samples. From a statistical mechanics perspective the encoding can be interpreted dynamically as learning a better mobility for the generative particles.
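A minimal sketch of a discriminator-guided particle flow of this kind, assuming PyTorch; the discriminator architecture, the simplified Wasserstein-style objective, and the step size are illustrative assumptions rather than the paper's exact algorithm.

```python
# Sketch: a transporter/discriminator particle scheme. A spectrally
# normalized (Lipschitz-constrained) discriminator is trained to separate
# data from particles, and particles move along its gradient.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

disc = nn.Sequential(
    spectral_norm(nn.Linear(2, 64)), nn.ReLU(),
    spectral_norm(nn.Linear(64, 1)),
)
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

data = torch.randn(512, 2) + torch.tensor([3.0, 0.0])   # target samples
particles = torch.randn(512, 2, requires_grad=True)     # generative particles

for step in range(200):
    # Discriminator update: score data above particles (a simplified
    # difference-of-means stand-in for the variational objective).
    loss_d = disc(particles.detach()).mean() - disc(data).mean()
    opt.zero_grad()
    loss_d.backward()
    opt.step()

    # Transport step: move particles along the discriminator gradient.
    grad = torch.autograd.grad(disc(particles).sum(), particles)[0]
    with torch.no_grad():
        particles += 0.1 * grad
```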
Self-supervised models are becoming increasingly prevalent in machine learning (ML) since they reduce the need for expensive labeled data. Because of their versatility in downstream applications, they are increasingly used as a service exposed via public APIs. At the same time, due to the high dimensionality of the vector representations they output, these encoder models are particularly vulnerable to model stealing attacks. Yet, encoders remain undefended: existing mitigation strategies for stealing attacks focus on supervised learning. We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of stealing. The intuition is that if an encoder was stolen from the victim, the log-likelihood of its output representations is higher on the victim's training data than on test data, but not if it was trained independently. We compute this log-likelihood using density estimation models. As part of our evaluation, we also propose to measure the fidelity of stolen encoders and to quantify the effectiveness of theft detection without involving downstream tasks; instead, we leverage mutual information and distance measures. Our extensive empirical results in the vision domain demonstrate that dataset inference is a promising direction for defending self-supervised models against model stealing.
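A minimal sketch of the log-likelihood test behind this defense, assuming scikit-learn and SciPy; the Gaussian-mixture density estimator, the synthetic representations, and the t-test decision rule are illustrative assumptions.

```python
# Sketch: fit a density model on the suspect encoder's representations of
# the victim's training data and compare log-likelihoods against held-out
# data; a stolen encoder should score the training data higher.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_reps = rng.normal(0.0, 1.0, (2000, 64))   # suspect encoder on victim train set
test_reps = rng.normal(0.1, 1.1, (2000, 64))    # suspect encoder on held-out data

density = GaussianMixture(n_components=8, random_state=0).fit(train_reps)
ll_train = density.score_samples(train_reps)     # per-sample log-likelihood
ll_test = density.score_samples(test_reps)

# Significantly higher likelihood on the victim's training data suggests
# the encoder was stolen; an independently trained encoder should not show this.
t, p = stats.ttest_ind(ll_train, ll_test, alternative="greater")
print(f"mean ll train={ll_train.mean():.2f} test={ll_test.mean():.2f} p={p:.3g}")
```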
Pre-trained models with knowledge-enhanced language representations have been shown to be more effective than language models such as BERT at knowledge base construction tasks (i.e., relation extraction). These knowledge-enhanced language models incorporate knowledge into pre-training to generate representations of entities or relations. However, existing methods typically represent each entity with a separate embedding. As a result, these methods struggle to represent out-of-vocabulary entities, a large number of parameters must be used on top of their underlying token models (i.e., transformers), and the number of entities that can be handled is limited in practice due to memory constraints. Moreover, existing models still struggle to represent entities and relations simultaneously. To address these problems, we propose a new pre-training model that learns representations of entities and relations from token spans and span pairs in the text, respectively. By encoding spans efficiently with a span module, our model can represent both entities and their relations while requiring fewer parameters than existing models. We pre-trained our model with a knowledge graph extracted from Wikipedia and tested it on a broad range of supervised and unsupervised information extraction tasks. The results show that our model learns better representations of both entities and relations than the baselines, while in supervised settings, fine-tuning our model consistently outperforms RoBERTa and achieves competitive results on information extraction tasks.
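A minimal sketch of representing entities as token spans and relations as span pairs, assuming PyTorch; the mean pooling and the linear pair head are illustrative assumptions, not the paper's exact span module.

```python
# Sketch: derive entity representations by pooling token states over a
# span, and relation representations from concatenated span pairs.
import torch
import torch.nn as nn

dim = 768
token_states = torch.randn(1, 32, dim)            # transformer token outputs

def span_repr(states, start, end):
    # Entity representation: mean-pool the token states of its span.
    return states[:, start:end].mean(dim=1)

relation_head = nn.Linear(2 * dim, dim)           # combine a pair of spans

head_ent = span_repr(token_states, 2, 5)          # e.g. the subject entity span
tail_ent = span_repr(token_states, 10, 12)        # e.g. the object entity span
relation = relation_head(torch.cat([head_ent, tail_ent], dim=-1))
```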